Governing the rise of interactive AI will require behavioral insights

AIHub

AI is no longer just a translator or image recognizer. Today, we engage with systems that remember our preferences, proactively manage our calendars, and even provide emotional support. They build ongoing bonds with users. They change their behavior based on our habits. They don't just wait for commands; they suggest next steps.


Biased by Design: Leveraging AI Biases to Enhance Critical Thinking of News Readers

Zavolokina, Liudmila, Sprenkamp, Kilian, Katashinskaya, Zoya, Jones, Daniel Gordon

arXiv.org Artificial Intelligence

This paper explores the design of a propaganda detection tool using Large Language Models (LLMs). Acknowledging the inherent biases in AI models, especially in political contexts, we investigate how these biases might be leveraged to enhance critical thinking in news consumption. Countering the typical view of AI biases as detrimental, our research proposes strategies of user choice and personalization in response to a user's political stance, applying psychological concepts of confirmation bias and cognitive dissonance.


Are we living in a golden age of stupidity?

The Guardian

Step into the Massachusetts Institute of Technology (MIT) Media Lab in Cambridge, US, and the future feels a little closer. Glass cabinets display prototypes of weird and wonderful creations, from tiny desktop robots to a surrealist sculpture created by an AI model prompted to design a tea set made from body parts. In the lobby, an AI waste-sorting assistant named Oscar can tell you where to put your used coffee cup. Five floors up, research scientist Nataliya Kosmyna has been working on wearable brain-computer interfaces she hopes will one day enable people who cannot speak, due to neurodegenerative diseases such as amyotrophic lateral sclerosis, to communicate using their minds. Kosmyna spends a lot of her time reading and analysing people's brain states.


Socratic Mind: Impact of a Novel GenAI-Powered Assessment Tool on Student Learning and Higher-Order Thinking

Lee, Jeonghyun, Hung, Jui-Tse, Soylu, Meryem Yilmaz, Popescu, Diana, Cui, Christopher Zhang, Grigoryan, Gayane, Joyner, David A, Harmon, Stephen W

arXiv.org Artificial Intelligence

This study examines the impact of Socratic Mind, a Generative Artificial Intelligence (GenAI)-powered formative assessment tool that employs Socratic questioning to support student learning in a large, fully online undergraduate-level computing course. Employing a quasi-experimental, mixed-methods design, we investigated participants' engagement patterns, the influence of user experience on engagement, and impacts on both perceived and actual learning outcomes. Data were collected from the system logs, surveys on user experience and perceived engagement and learning gains, student reflections, and course performance data. Results indicated that participants consistently reported high levels of affective, behavioral, and cognitive engagement, and these were strongly linked to positive user experiences and perceived learning outcomes. Quantitative analysis further revealed that students who engaged with the GenAI tool experienced significant gains in their quiz scores compared to those who did not, particularly benefiting students with lower baseline achievement. Additionally, thematic analysis of qualitative feedback revealed substantial perceived improvements in higher-order thinking skills, including problem solving, critical thinking, and self-reflection. Our findings highlight the promise of AI-mediated dialogue in fostering deeper engagement and higher-order cognitive skills. As higher education institutions expand GenAI integration in curriculum, this dialogic, GenAI-powered assessment tool can offer a scalable strategy to promote students' meaningful learning outcomes.


Toward LLM-Supported Automated Assessment of Critical Thinking Subskills

Peczuh, Marisa C., Kumar, Nischal Ashok, Baker, Ryan, Lehman, Blair, Eisenberg, Danielle, Mills, Caitlin, Chebrolu, Keerthi, Nashi, Sudhip, Young, Cadence, Liu, Brayden, Lachman, Sherry, Lan, Andrew

arXiv.org Artificial Intelligence

Critical thinking represents a fundamental competency in today's education landscape. Developing critical thinking skills through timely assessment and feedback is crucial; however, there has not been extensive work in the learning analytics community on defining, measuring, and supporting critical thinking. In this paper, we investigate the feasibility of measuring core "subskills" that underlie critical thinking. We ground our work in an authentic task where students operationalize critical thinking: student-written argumentative essays. We developed a coding rubric based on an established skills progression and completed human coding for a corpus of student essays. We then evaluated three distinct approaches to automated scoring: zero-shot prompting, few-shot prompting, and supervised fine-tuning, implemented across three large language models (GPT-5, GPT-5-mini, and ModernBERT). GPT-5 with few-shot prompting achieved the strongest results and demonstrated particular strength on subskills with separable, frequent categories, while lower performance was observed for subskills that required detection of subtle distinctions or rare categories. Our results underscore critical trade-offs in automated critical thinking assessment: proprietary models offer superior reliability at higher cost, while open-source alternatives provide practical accuracy with reduced sensitivity to minority categories. Our work represents an initial step toward scalable assessment of higher-order reasoning skills across authentic educational contexts.


AI-driven formative assessment and adaptive learning in data-science education: Evaluating an LLM-powered virtual teaching assistant

Anaroua, Fadjimata I, Li, Qing, Tang, Yan, Liu, Hong P.

arXiv.org Artificial Intelligence

This paper presents VITA (Virtual Teaching Assistants), an adaptive distributed learning (ADL) platform that embeds a large language model (LLM)-powered chatbot (BotCaptain) to provide dialogic support, interoperable analytics, and integrity-aware assessment for workforce preparation in data science. The platform couples context-aware conversational tutoring with formative-assessment patterns designed to promote reflective reasoning. The paper describes an end-to-end data pipeline that transforms chat logs into Experience API (xAPI) statements, instructor dashboards that surface outliers for just-in-time intervention, and an adaptive pathway engine that routes learners among progression, reinforcement, and remediation content. The paper also benchmarks VITA conceptually against emerging tutoring architectures, including retrieval-augmented generation (RAG)-based assistants and Learning Tools Interoperability (LTI)-integrated hubs, highlighting trade-offs among content grounding, interoperability, and deployment complexity. Contributions include a reusable architecture for interoperable conversational analytics, a catalog of patterns for integrity-preserving formative assessment, and a practical blueprint for integrating adaptive pathways into data-science courses. The paper concludes with implementation lessons and a roadmap (RAG integration, hallucination mitigation, and LTI 1.3 / OpenID Connect) to guide multi-course evaluations and broader adoption. In light of growing demand and scalability constraints in traditional instruction, the approach illustrates how conversational AI can support engagement, timely feedback, and personalized learning at scale. Future work will refine the platform's adaptive intelligence and examine applicability across varied educational settings.


I'm a High Schooler. AI Is Demolishing My Education.

The Atlantic - Technology

AI has transformed my experience of education. I am a senior at a public high school in New York, and these tools are everywhere. I do not want to use them in the way I see other kids my age using them (I generally choose not to), but they are inescapable. During a lesson on the Narrative of the Life of Frederick Douglass, I watched a classmate discreetly shift in their seat, prop their laptop up on a crossed leg, and highlight the entirety of the chapter under discussion. In seconds, they had pulled up ChatGPT and dropped the text into the prompt box, which spat out an AI-generated annotation of the chapter.


Experts warn AI stuffed animals could 'fundamentally change' human brain wiring in kids

FOX News

Do AI chatbots packaged inside plush animals really help children, or do they threaten vital developmental milestones? Companies market them as "screen-free playmates" for toddlers, but pediatric experts warn these toys could trade human connection for machine conversation. Toys like Grem, Grok and Rudi are designed to bond with kids through voice and conversation.


AI and learning retention: Does ChatGPT help or hurt?

FOX News

'The CyberGuy' Kurt Knutsson joins 'Fox & Friends Weekend' to discuss the potential effects of artificial intelligence software like ChatGPT on the brain. Artificial intelligence (AI) and large language models (LLMs), such as ChatGPT, are transforming how we learn. But what does this mean for AI and learning retention? While these tools provide instant answers and personalized support, experts are beginning to question whether this convenience might actually reduce our ability to retain knowledge in the long term.


Is Using ChatGPT to Write Your Essay Bad for Your Brain? New MIT Study Explained.

TIME - Tech

TIME reporter Andrew Chow discussed the findings of a new study about how ChatGPT affects critical thinking with Nataliya Kosmyna. Kosmyna was part of a team of researchers at MIT's Media Lab who set out to determine whether ChatGPT and large language models (LLMs) are eroding critical thinking, and the study returned some concerning results. The study divided 54 subjects into three groups, and asked them to write several essays using OpenAI's ChatGPT, Google's search engine, and nothing at all, respectively. Researchers used an EEG to record the writers' brain activity. What they found was that of the three groups, the ChatGPT users had the lowest brain engagement and consistently underperformed at neural, linguistic and behavioral levels.